72 research outputs found

    Unsupervised Band Selection in Hyperspectral Images using Autoencoder

    Hyperspectral images provide fine details of the observed scene through the exploitation of contiguous spectral bands. However, their high dimensionality places a heavy burden on processing, so selecting a subset of bands beforehand is a widely adopted practice. In this work, a new unsupervised approach for band selection based on autoencoders is proposed. During the training phase of the autoencoder, some features of each input sample are set to zero by a masking-noise transform, and the resulting reconstruction error is assigned to the masked indices: the larger the error, the greater the importance of the masked features. These errors are accumulated over the whole training phase, and the bands with the largest accumulated errors are finally selected. A comparison with four other band-selection approaches shows that the proposed method yields better results in some specific cases and similar results in other situations.
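The mask-and-score procedure described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: a rank-k linear reconstruction stands in for the trained autoencoder, and the parameter names (`mask_frac`, `n_epochs`, `k`) are assumptions for the sketch.

```python
import numpy as np

def band_selection(X, n_select, n_epochs=10, mask_frac=0.3, seed=0):
    """Score bands by accumulated reconstruction error at masked positions.

    X: (n_samples, n_bands) spectra. Returns indices of the n_select bands
    with the largest accumulated error (i.e., the most informative bands).
    """
    rng = np.random.default_rng(seed)
    n_samples, n_bands = X.shape
    errors = np.zeros(n_bands)
    for _ in range(n_epochs):
        mask = rng.random(X.shape) < mask_frac        # features to zero out
        X_noisy = np.where(mask, 0.0, X)
        # Stand-in "autoencoder": rank-k linear reconstruction via SVD.
        U, S, Vt = np.linalg.svd(X_noisy, full_matrices=False)
        k = min(8, len(S))
        X_rec = U[:, :k] * S[:k] @ Vt[:k]
        # Assign per-band squared error only at the masked positions.
        err = (X - X_rec) ** 2
        errors += np.where(mask, err, 0.0).sum(axis=0)
    return np.argsort(errors)[::-1][:n_select]
```

Bands whose masking hurts reconstruction the most are deemed the most important, mirroring the accumulation-then-selection logic of the abstract.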

    Moving object detection and segmentation in urban environments from a moving platform

    This paper proposes an effective approach to detect and segment moving objects from two time-consecutive stereo frames, which leverages the uncertainties in camera motion estimation and in disparity computation. First, the relative camera motion and its uncertainty are computed by tracking and matching sparse features across the four images. Then, the motion likelihood at each pixel is estimated by taking into account the ego-motion uncertainty and the uncertainty of the disparity computation procedure. Finally, the motion likelihood, color and depth cues are combined in a graph-cut framework for moving-object segmentation. The proposed method is evaluated on the KITTI benchmark datasets, and our experiments show that it is robust against both global (camera motion) and local (optical flow) noise. Moreover, the approach is dense, as it applies to all pixels in an image, and even partially occluded moving objects can be detected successfully. Without a dedicated tracking strategy, our approach achieves high recall and comparable precision on the KITTI benchmark sequences. This work was carried out within the framework of the Equipex ROBOTEX (ANR-10-EQPX-44-01). Dingfu Zhou was sponsored by the China Scholarship Council for a 3.5-year PhD study at the HEUDIASYC laboratory, University of Technology of Compiègne.
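The per-pixel motion-likelihood step can be illustrated as a Mahalanobis test on the residual between the observed flow and the flow predicted from ego-motion. This is a generic sketch under an assumed Gaussian uncertainty model, not the paper's exact formulation; the shapes and the chi-square scoring are assumptions.

```python
import numpy as np

def motion_likelihood(flow_obs, flow_pred, cov_pred):
    """Per-pixel likelihood that a pixel belongs to a moving object.

    flow_obs, flow_pred: (H, W, 2) observed vs ego-motion-predicted flow.
    cov_pred: (H, W, 2, 2) propagated uncertainty of the predicted flow.
    """
    r = flow_obs - flow_pred                          # residual per pixel
    cov_inv = np.linalg.inv(cov_pred)                 # batched 2x2 inverse
    # Squared Mahalanobis distance d^2 = r^T Sigma^-1 r per pixel.
    d2 = np.einsum('...i,...ij,...j->...', r, cov_inv, r)
    # Under the static hypothesis, d^2 is chi-square with 2 dof, so
    # P(static) ~ exp(-d^2 / 2); a pixel is likely moving when that is small.
    return 1.0 - np.exp(-0.5 * d2)
```

A likelihood map of this kind can then be combined with color and depth cues as unary terms in a graph-cut segmentation, as the abstract describes.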

    Entropy Based Multi-robot Active SLAM

    In this article, we present an efficient multi-robot active SLAM framework that involves a frontier-sharing method for maximum exploration of an unknown environment. It encourages the robots to spread into the environment while weighting the goal frontiers with the pose-graph SLAM uncertainty and path entropy. Our approach works on a limited number of frontier points and weights the goal frontiers with a utility function that encapsulates both the SLAM and map uncertainties, thus providing an efficient solution that is not computationally expensive. Our approach has been tested in publicly available simulation environments and on real robots. A cumulative 31% more coverage than similar state-of-the-art approaches has been obtained, proving the capability of our approach for efficient environment exploration. Comment: 14 pages, 9 figures.
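The frontier-weighting idea can be sketched as a utility that rewards expected information gain and penalizes SLAM uncertainty and path entropy at each candidate frontier. The additive form and the weights below are assumptions for illustration; the paper's actual utility function may combine these terms differently.

```python
import math

def best_frontier(frontiers, info_gain, slam_uncertainty, path_entropy,
                  w_info=1.0, w_slam=0.5, w_path=0.5):
    """Pick the frontier maximizing a (hypothetical) utility:
    u(f) = w_info * gain(f) - w_slam * slam_unc(f) - w_path * path_ent(f).

    frontiers: iterable of frontier ids; the other arguments map each id
    to its score. Returns (best_frontier, best_utility).
    """
    best, best_u = None, -math.inf
    for f in frontiers:
        u = (w_info * info_gain[f]
             - w_slam * slam_uncertainty[f]
             - w_path * path_entropy[f])
        if u > best_u:
            best, best_u = f, u
    return best, best_u
```

Working on a small, shared set of frontier points keeps this evaluation cheap, which is consistent with the abstract's claim of low computational cost.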

    Communicating Multi-UAV System for Cooperative SLAM-based Exploration

    In the context of multi-robot systems, and more generally of technological systems-of-systems, this paper proposes a multi-UAV (Unmanned Aerial Vehicle) framework for SLAM-based cooperative exploration under limited communication bandwidth. The exploration strategy, based on RGB-D grid mapping and group-leader decision making, uses a new utility function that takes into account each robot's distance from the unexplored set of targets, and makes it possible to simultaneously explore the environment and obtain a detailed grid map of specific areas in an optimized manner. Compared to state-of-the-art approaches, the main novelty is to exchange only the frontier points of the computed local grid map, reducing the shared data volume and, consequently, the memory consumption. Moreover, communication constraints are taken into account within a SLAM-based multi-robot collective exploration, so the proposed strategy is also designed to cope with communication drop-outs or failures. The multi-UAV system is implemented with ROS and the Gazebo simulator on multiple computers equipped with network facilities. Results show that the proposed cooperative exploration strategy reduces the global exploration time by 25% for 2 UAVs and by 30% for 3 UAVs, while outperforming state-of-the-art exploration strategies based on both random and closest frontiers, and reduces the average distance travelled by each UAV by 55% for 2 UAVs and by 62% for 3 UAVs. Furthermore, the system performance is also evaluated in a realistic test-bed comprising an infrastructure-less network, which is used to support limited communications. The test-bed results show that the proposed exploration strategy uses 10 times less data than a strategy in which the robots exchange their whole local maps.
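The leader-side decision making can be sketched as a greedy assignment of shared frontier points to UAVs, with each robot's utility discounted by its distance to the frontier. The distance-discounted form `gain / (1 + d)` and the greedy loop are assumptions for illustration, not the paper's exact utility function.

```python
import math

def assign_frontiers(robots, frontiers, gain):
    """Greedy leader-side assignment of frontier points to UAVs.

    robots: {robot_id: (x, y) position}; frontiers: list of (x, y) points;
    gain: {frontier: expected information gain}. Each robot receives the
    still-unassigned frontier maximizing gain discounted by its distance.
    """
    assignment, taken = {}, set()
    for rid, pos in robots.items():
        best, best_u = None, -math.inf
        for f in frontiers:
            if f in taken:
                continue
            u = gain[f] / (1.0 + math.dist(pos, f))   # hypothetical utility
            if u > best_u:
                best, best_u = f, u
        if best is not None:
            assignment[rid] = best
            taken.add(best)
    return assignment
```

Because only the frontier points (not the full occupancy grids) need to be exchanged for this computation, the shared data volume stays small, which matches the abstract's bandwidth argument.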

    Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection

    In this paper, we improve single-vehicle LiDAR-based 3D object detection models by extending their capacity to process point cloud sequences instead of individual point clouds. In this step, we extend our previous work on rectifying the shadow effect in the concatenation of point clouds to boost the detection accuracy of multi-frame detection models. Our extension includes incorporating an HD map and distilling an Oracle model. Next, we further increase single-vehicle perception performance through multi-agent collaboration via Vehicle-to-Everything (V2X) communication. We devise a simple yet effective collaboration method that achieves better bandwidth-performance tradeoffs than prior art while minimizing the changes made to single-vehicle detection models and the assumptions on inter-agent synchronization. Experiments on the V2X-Sim dataset show that our collaboration method achieves 98% of the performance of early collaboration while consuming an amount of bandwidth equivalent to late collaboration, which is 0.03% of that of early collaboration. The code will be released at https://github.com/quan-dao/practical-collab-perception. Comment: Work in progress.

    Uncertainty-Aware Adaptive Semi-Direct LiDAR Scan Matching for Ground Vehicle Positioning

    Iterative Closest Point (ICP) algorithms are widely used in the literature for the estimation of relative transformations from 3D LiDAR point clouds. This class of algorithms proves effective when the 3D data share sufficient overlap and a good initial guess is provided. However, large relative motions and mutual occlusion of objects in real-road scenarios hinder traditional optimization-based ICP from achieving optimal estimation. This paper explores both direct and feature-based 3D LiDAR scan matching within the ICP framework in different contexts, such as parking, residential, urban, and highway scenarios. To guarantee scan-matching performance in scenarios with scarce geometric information and fast ego-vehicle motion, we propose an adaptive semi-direct scan-matching method together with an alignment uncertainty quantification. The proposed semi-direct scan matching is tested on both the public KITTI and self-recorded LS2N datasets, accomplishing robust 6-Degrees-of-Freedom (DoF) pose estimation and consistent scene reconstruction. We demonstrate that the proposed approach outperforms the state of the art, achieving leading results with 68.3% average relative fitness and 5.71 cm average RMSE.
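The core rigid-alignment step that any ICP variant iterates, once correspondences are fixed, is the standard closed-form Kabsch/SVD solution. The sketch below shows only that step; the paper's adaptive semi-direct machinery and its uncertainty quantification are not reproduced here.

```python
import numpy as np

def icp_align_step(src, dst):
    """One point-to-point rigid alignment step (Kabsch, no scale).

    src, dst: (N, 3) arrays of corresponding points. Returns R, t such
    that R @ src[i] + t best matches dst[i] in the least-squares sense.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps det(R) = +1 (a proper rotation).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Full ICP alternates this step with nearest-neighbor correspondence search; it is exactly this reliance on good correspondences and a good initial guess that the abstract identifies as fragile under large motions and occlusion.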

    A Loosely Coupled Vision-LiDAR Odometry using Covariance Intersection Filtering

    This paper presents a loosely coupled sensor fusion approach, which efficiently combines complementary visual and range sensor information to estimate the vehicle ego-motion. Descriptor-based and distance-based matching strategies are applied to visual and range measurements, respectively, for feature tracking. Nonlinear optimization optimally estimates the relative pose across consecutive frames, and an uncertainty analysis using forward and backward covariance propagation models the estimation accuracy. A covariance intersection filter then allows us to loosely couple stereo-vision and LiDAR odometry while accounting for their respective uncertainties. We evaluate our approach on the KITTI dataset, showing its robustness to aggressive rotational motion and to the temporary absence of visual features, and achieving an average relative translation error of 0.84% on the challenging highway sequence 01.
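Covariance intersection is the classic rule for fusing two estimates whose cross-correlation is unknown: P⁻¹ = ω P₁⁻¹ + (1 − ω) P₂⁻¹, with ω chosen to minimize some measure of the fused covariance. The sketch below uses a simple grid search minimizing the trace; the paper may use a different optimization criterion or solver.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_omega=50):
    """Fuse (x1, P1) and (x2, P2) with unknown cross-correlation via CI.

    Searches omega in [0, 1] for the weighting minimizing trace(P), where
    P^-1 = omega * P1^-1 + (1 - omega) * P2^-1 and
    x = P (omega * P1^-1 x1 + (1 - omega) * P2^-1 x2).
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_omega):
        P = np.linalg.inv(w * P1i + (1.0 - w) * P2i)
        tr = np.trace(P)
        if best is None or tr < best[2]:
            x = P @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
            best = (x, P, tr)
    return best[0], best[1]
```

The key property for a loosely coupled design is that the fused estimate remains consistent even though the stereo-vision and LiDAR odometry errors may be correlated in unknown ways, e.g., through shared calibration or motion.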